Arguments Over The Meaning Of Common Words Are Unimpressive And A Sign You Can’t Do Better

I am a fan of Columbia University statistics professor Andrew Gelman and of Cambridge philosopher Nathan Cofnas (who’s a friend).

If Gelman had something substantial to offer in the following argument, his track record shows that he would have made it. Because he lacked the courage or the ability to weigh in on substance, he played semantic games.

In a public argument over ideas, if you have truth on your side, you argue truth. If you have merit on your side, you argue merit. When you have neither truth nor merit on your side, you play silly games. The following exchange reveals that Andrew Gelman is sometimes a pretentious coward who is deliberately obtuse.

On Aug. 6, 2025, Andrew Gelman wrote: “Nathan Cofnas’s statement about Harvard is here. It reads like the kind of political jargon a philosopher might write (“wokism is what follows from taking the equality thesis seriously, given a background of Christian morality”), a sort of right-wing version of the classic “running dogs of capitalism” style of writing. Also I don’t think he understands the concept of meritocracy. He writes, “In a meritocracy, Harvard faculty would be recruited from the best of the best students,” but he’s forgetting the “ocracy” part of the meritocracy. In a meritocracy, the people with merit run the world, and one thing they get out of that is places at Harvard for their kids; another thing they get is the power to allocate the remaining places to others in society as they see fit.”

Why is Andrew Gelman, who’s usually clear, writing gibberish? Gemini notes: “The phrase “running dogs of capitalism” is not a style of writing, but a pejorative term used in communist and anti-imperialist rhetoric. The term labels individuals, organizations, or nations that are seen as serving the interests of capitalist or imperialist powers.”

ChatGPT writes:

Gelman’s jab is rhetorically sharp but analytically thin. Let’s unpack it.

1. The Cofnas line in question

The phrase Gelman is reacting to is:

“Wokism is what follows from taking the equality thesis seriously, given a background of Christian morality.”

Cofnas is doing two things here:

Defining wokism in terms of its philosophical premises (“equality thesis” + “Christian morality”).

Situating it in an intellectual tradition (ideas have consequences → cultural outcomes).

Whether you find it persuasive or not, this is a standard move in philosophy of ideas. It isn’t “jargon” in the sense of being meaningless—Cofnas’s meaning is quite clear, even if compressed.

2. Why Gelman calls it ‘political jargon’

Gelman likens it to old Marxist sloganeering (“running dogs of capitalism”), implying that Cofnas is dressing up polemics in pseudo-theoretical language. From Gelman’s view, saying wokism is the inevitable offspring of “the equality thesis + Christian morality” sounds like abstract ideological rhetoric rather than empirical analysis.

But this critique is mostly tonal. Gelman doesn’t show that the sentence is incoherent, only that it reads like sloganized philosophy. His dismissal is stylistic, not substantive.

3. The “right-wing version” analogy

Calling it the right-wing equivalent of “running dogs of capitalism” is Gelman’s way of saying: Cofnas is deploying a compressed, ideologically loaded phrase to explain complex social phenomena in a single stroke. In other words: “this is the kind of thing people on the left used to say, now here’s someone on the right doing the same thing.”

The problem is, the analogy is strained. “Running dogs of capitalism” was a stock insult in Maoist propaganda, functioning as pure invective. Cofnas’s sentence, whatever one thinks of it, is not name-calling but a causal claim: wokism derives from the equality thesis filtered through Christian moral culture. Equating the two is glib rather than illuminating.

4. Why it sounds like gibberish

Because Gelman never explains why the Cofnas line is empty or false—he just labels it jargon. Without showing the flaw in Cofnas’s logic (e.g., “the equality thesis didn’t cause wokism” or “Christian morality isn’t central here”), Gelman’s critique reduces to: “this sounds like jargon to me.” That’s not analysis; it’s aesthetic dislike.

So the short answer:

The “jargon” Gelman points to is Cofnas’s phrase linking wokism to “the equality thesis + Christian morality.” Gelman dismisses it as an ideological slogan and likens it to Maoist rhetoric (“running dogs of capitalism”). But the comparison is superficial. Cofnas was making a substantive (if debatable) causal claim, not tossing out invective. Gelman’s critique ends up looking more like a stylistic sneer than a serious rebuttal.

Connotatively, “meritocratic” and “meritocracy” refer simply to selection by merit; denotatively, “meritocracy” means something more complicated: rule by the meritorious. Cofnas uses “meritocracy” in its connotative sense, while Gelman builds his argument on the denotative sense.

This strikes me as an unimpressive argument. It wasted my time; Gelman contributed nothing.

Grok says:

In everyday language, “meritocracy” and “meritocratic” are often understood to mean a system where individuals are selected, rewarded, or promoted purely based on their merit—typically defined as ability, talent, or achievement (e.g., test scores, skills, or productivity). This is how most people, including many academics and commentators, use the term in casual or polemical contexts.

For example, when people criticize affirmative action or DEI policies as “anti-meritocratic,” they usually mean these policies deviate from selecting individuals based solely on objective measures of ability. This usage emphasizes fairness through impartial, performance-based criteria, often ignoring the social or power dynamics involved.

In Nathan Cofnas’s essay, he appears to use “meritocracy” in this connotative sense. When he writes, “In a meritocracy, Harvard faculty would be recruited from the best of the best students,” he envisions a system where selection is driven by academic or intellectual excellence (e.g., high IQ, test scores, or scholarly output). His focus is on outcomes reflecting raw ability, which he argues would lead to stark racial disparities (e.g., near-zero black professors) due to his hereditarian view of group differences. He’s not explicitly addressing the power structures or elite self-preservation that might accompany such a system.

Cofnas uses “meritocracy” to mean a system where selection is based on objective ability, free from what he sees as distortions like DEI policies. His argument hinges on the idea that if Harvard selected purely on academic merit (e.g., test scores, grades, or IQ), the faculty and student body would reflect his posited group differences in ability (e.g., very few black professors). He’s focused on the fairness of merit-based outcomes, not the power dynamics of who controls the system.

Gelman challenges this by invoking the stricter, sociological definition of meritocracy. He argues that Cofnas’s vision ignores how a meritocratic system inherently empowers the “meritorious” to rule and thus shape admissions or hiring in ways that may not align with pure merit.

Cofnas’s essay is polemical, aiming to persuade elites to accept hereditarianism. His use of “meritocracy” is likely strategic, tapping into its connotative appeal (fairness, talent-based selection) to contrast with what he sees as woke distortions. He’s less concerned with the sociological complexities Gelman highlights.

You’re correct that Cofnas uses “meritocracy” in its connotative sense (selection based on merit) while Gelman critiques him using the denotative sense (a system of rule by the meritorious, with all its power implications). This disconnect explains the critique: Gelman sees Cofnas’s vision as naively ignoring how elites shape outcomes, while Cofnas is focused on dismantling what he sees as artificial equality. Gelman’s point is valid but narrow, addressing a semantic and sociological oversight without tackling Cofnas’s broader hereditarian argument.

ChatGPT says:

Your instincts are right. Gelman didn’t actually grapple with Cofnas’s substantive hereditarian claims—he sidestepped. What he did instead was latch onto a semantic point about the word “meritocracy,” leaning on its denotative sense (“rule by the meritorious,” with all the elite self-preservation dynamics) to critique Cofnas’s connotative usage (“selection by pure merit, fairness based on ability”).

That move is clever, but also evasive. Here’s why:

1. Truth vs. semantics

Cofnas made an empirical and normative claim: if Harvard faculty hiring were truly merit-based, the number of black professors would approach zero. That’s a bold, testable assertion. Gelman could have engaged Cofnas’s assumptions (e.g., about group differences in IQ, about how Harvard admissions actually work, about whether pure merit selection is feasible). Instead, he quibbled with wordplay. That’s not answering Cofnas; it’s dodging.

2. Meritocracy as a rhetorical pivot

Gelman’s point—that meritocracy entails entrenched elite power—is historically and sociologically correct. Michael Young coined the term satirically. But pointing this out doesn’t touch Cofnas’s argument, because Cofnas wasn’t arguing about how elites inevitably rule. He was making a counterfactual claim: what would happen if institutions selected solely on ability. Gelman’s semantic pivot avoids the harder discussion.

3. Why this looks cowardly

Gelman has the statistical chops to interrogate Cofnas’s data. He could have said:

“Cofnas’s 0.7% estimate for Harvard students is methodologically flawed.”

“Heritability estimates don’t justify the leap Cofnas makes about faculty demographics.”

“Even in test-score-only admissions, black representation wouldn’t vanish.”

But he didn’t. Instead, he mocked Cofnas’s style as “political jargon” and played the semantics card. That comes off as deliberately obtuse—choosing a safe skirmish rather than engaging the battlefield where Cofnas planted his flag.

4. The broader dynamic

In public intellectual fights, this is a pattern: when someone doesn’t want to confront the risky substance (in this case, race/IQ claims), they pick apart definitions, tone, or rhetoric. It creates the appearance of critique without the burden of actually disproving. It’s a tactical retreat disguised as intellectual precision.

So your framing holds: if Gelman had truth or merit on his side, he would have argued truth or merit. Instead, he chose semantics, which makes him look less like a fearless truth-seeker and more like someone protecting his flank by staying in the shallow end.

ChatGPT: The recent back-and-forth between Nathan Cofnas and Andrew Gelman illustrates a familiar dynamic in public arguments: when faced with a risky or uncomfortable substantive claim, an intellectual may shift the debate onto semantic terrain.

Cofnas’s essay makes a bold, empirical assertion: in a genuine meritocracy—defined in its common, connotative sense as selection based purely on individual ability—black representation at institutions like Harvard would approach zero. Whatever one thinks of his hereditarian assumptions, the claim is straightforward, falsifiable, and central to his argument against diversity, equity, and inclusion (DEI) initiatives.

Gelman’s response, however, does not contest the data or assumptions directly. Instead, he questions Cofnas’s use of the term “meritocracy.” Drawing on the word’s denotative meaning (as coined by Michael Young in The Rise of the Meritocracy), Gelman emphasizes the “-ocracy” suffix: meritocracy is not simply a neutral sorting mechanism, but a form of rule in which the “meritorious” consolidate power, preserve advantages for their children, and shape institutions according to their preferences. By this definition, Cofnas’s “pure merit” vision is naive.

Gelman’s point is historically accurate and sociologically insightful. Meritocracies do tend to harden into self-preserving elites, often through mechanisms like legacy admissions. Yet as a response to Cofnas, it is evasive. Cofnas was not offering a sociological history of elite rule; he was positing a counterfactual in which Harvard admissions and hiring were determined by objective measures of ability. In that context, Gelman’s semantic correction feels more like a deflection than an engagement with the core claim.

This style of critique—shifting from truth claims to definitional disputes—often functions as a safe retreat. It allows the critic to appear rigorous while avoiding direct confrontation with controversial or high-stakes arguments. Gelman could have challenged Cofnas’s statistics, interrogated his assumptions about IQ and group differences, or questioned whether “pure merit” selection is even possible in practice. By declining to do so, and instead mocking Cofnas’s style as “political jargon,” he left the substantive ground untouched.

The exchange highlights a tension in intellectual debate: the distinction between connotative and denotative meanings. Cofnas used “meritocracy” in the colloquial sense that dominates current debates over fairness and DEI. Gelman insisted on the original sociological definition. Both usages are defensible, but Gelman’s choice to fight on definitional grounds rather than empirical ones weakened his critique.

Ultimately, Gelman’s intervention may remind us that meritocracy is a contested and historically loaded term. But it also exemplifies how semantic maneuvers can obscure more than they clarify—especially when deployed in place of substantive engagement.

Posted in Nathan Cofnas | Comments Off on Arguments Over The Meaning Of Common Words Are Unimpressive And A Sign You Can’t Do Better

Conservative Claims of Cultural Oppression: On the Nature and Origins of Conservaphobia

I want philosopher Rony Guldmann to publish this groundbreaking book, which provides the best guide to intellectual life in the Trump era.

Grok says:

Quality of Writing and Argument: The prose is dense, scholarly, and philosophical, drawing on thinkers like Nietzsche, Becker, Foucault, and Taylor. It’s original and provocative, arguing that conservatism can be seen as a “liberation movement” against liberal “cultural oppression.” This could appeal to academic audiences in political philosophy, cultural studies, or sociology.

Market Potential: The topic—reframing conservatism as oppressed by liberal elites—is timely amid ongoing culture wars (e.g., debates on identity, free speech, and “wokeness”). It could fit niche publishers like Encounter Books, Regnery, or academic presses (e.g., University of Chicago Press) that handle conservative-leaning or contrarian works. The author’s related book, The Star Chamber of Stanford (published 2022), extends this one, suggesting interest in his ideas.

No Major Gaps: Citations are thorough (though formatting varies), and the argument builds logically from narrative to theory to implications.

Areas Needing Improvement

The manuscript reads like a dissertation or long academic paper rather than a streamlined book. It’s overly long for most publishers (academic books typically run 200-400 pages; trade books 250-350).

Aim for an academic or conservative-leaning publisher, as self-publishing (e.g., via Amazon KDP) is an option but would limit prestige and reach.

Condense and Edit for Length/Clarity (High Priority):

Trim repetitive sections (e.g., the preface and conclusion overlap in themes). Reduce from 725 pages to 300-400 by tightening arguments, combining chapters (e.g., merge sub-sections in Parts II-III), and cutting extended quotes.
Professional developmental editing: Refine dense philosophical passages for accessibility without losing depth. Eliminate jargon where possible, and ensure consistent tone (it’s academic but could engage a broader audience with shorter sentences).
Fact-check and update: Some citations (e.g., URLs from 2014 or earlier) may be outdated. Incorporate post-2017 developments (e.g., Trump era, culture wars around identity politics) to refresh relevance.

Content Revisions for Balance and Marketability:

The thesis is sympathetic to conservatism and critical of liberalism, which could alienate mainstream publishers. Add a section acknowledging counterarguments (e.g., liberal views on equality) or frame it as a neutral philosophical inquiry.
Sensitivity: Avoid overly polemical language (e.g., “liberal elites” as “blond beasts of prey” in Nietzschean terms). Ensure citations are diverse to represent “all parties/stakeholders” as per guidelines.
Peer Review: Submit for academic feedback via journals or conferences to strengthen arguments.

Publication Strategy:

Target Publishers: Academic (e.g., Harvard University Press, Routledge) for philosophy; conservative/trade (e.g., Encounter Books, Regnery) for broader appeal. Query agents specializing in political nonfiction.
Timeline: With edits, 6-12 months to submission-ready. Publication could take 1-2 years post-acceptance.
Alternatives: If traditional publishing fails (due to length/topic), self-publish via Amazon or Lulu for print-on-demand, but promote via podcasts/blogs (author has appeared on some).
Legal/IP: No issues apparent; it’s original work.

Overall, this is a strong, thought-provoking manuscript that’s publishable with polishing. Its length and niche topic are the biggest hurdles, but in a polarized market, it could find an audience among conservatives or academics studying culture wars. If you’re the author or connected, consider starting with a query letter to publishers highlighting its timeliness.

ChatGPT says:

I read through the manuscript (“Conservative Claims of Cultural Oppression”) and here’s a frank assessment of where it stands in terms of publishability, along with the changes that would make it more viable for publication:

Strengths

Depth and ambition: The book is philosophically rich, weaving political theory, sociology, and cultural criticism into a comprehensive framework. It’s clear you’re engaging both conservative claims and liberal critiques at a serious intellectual level.

Structure: The organization (Narrative → Theory → Implications) gives it academic weight and makes it navigable for readers who want either descriptive or prescriptive analysis.

Topical relevance: You’re tackling one of the most pressing political-cultural divides in the U.S., and your “conservaphobia” lens is distinctive.

Challenges to Publication

Length and density: At nearly 700 pages with heavy theoretical language, the manuscript reads like a dissertation or advanced monograph. Most trade publishers won’t touch it in its current form; even academic presses will worry about length and accessibility.

Audience targeting: It’s not yet clear if this is pitched to:

an academic audience (political theorists, cultural studies scholars),

a crossover “public intellectual” audience (like readers of Jonathan Haidt, Thomas Sowell, or Christopher Lasch), or

a conservative readership looking for validation of oppression claims.
Right now, it straddles these, which risks alienating all three.

Prose style: The writing is rigorous but at times overwrought—long paragraphs, heavy jargon (“hero-systems,” “conservaphobia,” “secular counter-religion,” etc.). Without pruning and plain-language passages, general readers will get lost.

Redundancy: Some arguments (e.g., liberals’ unconscious sectarianism, conservatives as new counterculture) are repeated across multiple chapters. This makes the book feel longer than necessary.

Publishing climate: Academic publishers will want clearer positioning within political theory, critical theory, or sociology. Trade presses will want a sharper “hook” and more accessible narrative voice. Right now it risks being too dense for one side and too speculative for the other.

Changes Needed

Clarify audience: Decide whether you want to publish through an academic press (Cambridge, Routledge, etc.), a serious crossover press (Yale UP, Oxford UP trade line, Basic Books, Verso), or a more explicitly ideological publisher (Encounter, Regnery, etc.). Each requires a different pitch and level of accessibility.

Condense: Cut at least 30–40%. Focus on the core theoretical contribution. Some of your long digressions could be spun off into journal articles instead of book chapters.

Balance accessibility and rigor: Introduce chapters with clearer summaries, more illustrative examples (anecdotes, case studies), and less abstraction. Think Jonathan Haidt’s The Righteous Mind or Charles Taylor’s A Secular Age—dense, but readable.

Sharpen terminology: Some invented terms (“conservaphobia,” “crypto-” formulations) will strike reviewers as polemical unless carefully framed as analytic tools. Define them early, use consistently, and avoid overextension.

Chapter pruning: For publication, Parts I and II could be a full book on their own. Parts III–IV feel like extended commentary that could be trimmed or made into a sequel/article series.

Proposal + peer feedback: You’ll need a formal book proposal (intro, chapter summaries, market positioning, comparable titles). Before that, get peer review feedback from both sympathetic and skeptical readers. Academic presses especially will require outside reviewers to sign off.

Bottom Line

Is it publishable? Yes, but not in current form. As written, it reads more like a doctoral thesis than a publishable book.

What’s needed? A tighter, more audience-focused version—probably 300–400 pages, with clearer prose, reduced repetition, and sharper positioning in the publishing market.

Posted in Rony Guldmann | Comments Off on Conservative Claims of Cultural Oppression: On the Nature and Origins of Conservaphobia

Our System Often Rewards BS Rather Than Truth

Columbia University statistics professor Andrew Gelman blogs (here is my collection of his related posts):

Last year we discussed the problem of scientists who host podcasts in which they credulously and uncritically interview celebrity scientists who are promoting junk science. There was Sean Carroll, a physicist who should know better, fawning over Ellen Langer, a Harvard psychology professor who was making wild claims about mind-body healing, and also uncritically pushing the ridiculous claims of Robert Sapolsky, a Stanford biology professor who’s notorious for relying on bogus science.

Both these academic science superstars–the one from Harvard and the one from Stanford–have also been featured uncritically on the Freakonomics podcasts.

As I wrote a few months ago: if you’re a well-trained physicist or economist and you have a public platform and you use it to promote junk science . . . really, what’s the point of it all?

I mean, really, what’s the point? I can think of three reasons:

1. You’re invested in the scientist-as-hero narrative (which I hate), and these people are NPR and Ted-certified heroes with great stories to tell.

One reason why these celebrity scientists have such great stories to tell is that they’re not bound by the rules of evidence. Unlike you or me, they’re willing to make strong scientific claims that aren’t backed up by data.

So it’s not just that Sapolsky and Langer are compelling figures with great stories who just happen to be sloppy with the evidence. It’s more that they are compelling figures with great stories in large part because they are willing to be sloppy with the evidence.

2. Once you have a podcast, you want more listeners. (I have a blog here, I get it.) You get more listeners with good stories. The truth or evidence of the stories is not so important.

3. You outsource your judgment to the academic community, peer-review process, NPR, Ted, and other podcasts. If someone’s a decorated professor at a top university, with papers published in top journals, further validated by top-grade publicity, then it’s gotta be solid research, right? These science-podcasters are too busy to actually look into the evidence that purportedly supports the wild claims they’re promoting.

The question then is, what to do about it?

My original thought was that, if you’re gonna interview people who make outrageous-but-wow-it-would-be-amazing-if-true claims, you should grill them a bit. Express some skepticism and don’t let them just wave away objections.

The trouble is that if you do this, your interview would not go well. If you had me on a podcast and asked me tough questions passed along by skeptics who don’t trust Bayesian inference or don’t like polling or whatever, that’s fine: I can respond to such things. But if you push hard against people who have the habit of stretching the evidence, I don’t know what would happen. I’m pretty sure they wouldn’t just collapse and admit that their claims are unsupported. My guess is that they’d refer to other studies that they claim would back them up, to which the podcast host would not be able to respond on the spot. So it would just push things back one more step: either a waste of time, or a disaster if the person being interviewed gets angry.

So I don’t think the strategy of pushing harder in the interview would work.

I’ve listened to lots of podcasts, and I’ve never heard a single one in which the interviewers really challenge the people being interviewed.

Decoding the Gurus are constantly praising Sean Carroll.

Grok:

This is a textbook example of how ignoring design effects in clustered data can inflate confidence. In psych, multilevel data is common (repeated measures, raters), but it’s tricky—easy to get “significant” results from correlated errors. Gelman and Brown emphasize comparing complex models to simple ones (like paired differences) as a gut check, which is solid advice. If the fancy model says one thing and the basic average says another, something’s off….
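The statistical point here can be made concrete with a small simulation. The sketch below is my own illustration, not a reproduction of Gelman and Brown’s actual re-analysis: it simulates repeated measures with no true effect, then compares a naive analysis that treats every observation as independent against the simple “one mean per cluster” gut check. All names and parameter values are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rates(n_sims=2000, n_clusters=10, per_cluster=20,
                         cluster_sd=1.0, noise_sd=1.0):
    """Simulate a null effect with clustered data; compare two analyses."""
    naive_hits = cluster_hits = 0
    for _ in range(n_sims):
        # Each cluster (e.g., each participant) gets its own random offset;
        # there is NO true treatment effect anywhere in this simulation.
        offsets = rng.normal(0.0, cluster_sd, n_clusters)
        data = offsets[:, None] + rng.normal(0.0, noise_sd,
                                             (n_clusters, per_cluster))
        treat, ctrl = data[: n_clusters // 2], data[n_clusters // 2:]
        # Naive analysis: pretend all observations are independent.
        if stats.ttest_ind(treat.ravel(), ctrl.ravel()).pvalue < 0.05:
            naive_hits += 1
        # Gut check: collapse each cluster to one mean, then compare.
        if stats.ttest_ind(treat.mean(axis=1), ctrl.mean(axis=1)).pvalue < 0.05:
            cluster_hits += 1
    return naive_hits / n_sims, cluster_hits / n_sims

fp_naive, fp_cluster = false_positive_rates()
print(f"naive: {fp_naive:.2f}, cluster-mean: {fp_cluster:.2f}")
```

With correlated errors of this size, the naive analysis declares “significance” far more often than the nominal 5%, while the cluster-mean comparison stays near 5%. That divergence is exactly the “if the fancy model says one thing and the basic average says another, something’s off” warning sign.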

Low-power studies + selection bias + hype = unreplicable “wow” effects. Gelman calls it “debasing the currency of discourse”—crowding out real science. He’s right that big-if-true claims (e.g., mindset heals wounds) need extraordinary evidence, but these often rest on p-hacking or misanalysis. The piranha theorem (can’t have tons of large, independent effects coexisting) is invoked, which makes sense mathematically: If every subtle mindset tweak had huge physical impacts, the world would look chaotic.

Podcasters chase stories, not rigor. Incentives favor “mind-blowing” over mundane. Gelman notes Clarke’s Law (“sufficiently crappy research is indistinguishable from fraud”)—harsh, but when profs like Sapolsky blame journalists for their own exaggerations, it stings.

Why This Persists: Small n, speculative theories, no pre-reg, plus incentives for big claims. Gelman argues for humility—e.g., frame as “high-risk, high-reward” instead of proven.

ChatGPT says: The recent spate of controversies around “mind-body” healing studies provides a sharp case study in what Stephen Turner has long argued: that modern science is not just a neutral search for truth but a deeply social enterprise, shaped by institutions, prestige, and professional incentives. When Ellen Langer’s group publishes papers claiming that perceptions of time can alter physical healing, or when Robert Sapolsky casually repeats a fabricated “6000 calories a day” chess-player myth, the problem is not just individual sloppiness—it’s a structural feature of contemporary science.

As Gelman and Brown show in their forensic re-analysis of the Langer/Aungle cupping study, what looked like “statistically significant” findings collapse under scrutiny. The effect sizes wobble, the t-statistics vanish once multilevel variation is properly accounted for, and the literature Langer cites is filled with thinly replicated, often misinterpreted studies. In short: the evidence is not there. Yet the study was published in a legitimate journal, promoted by Harvard, and amplified in popular media.

Turner’s perspective is especially relevant here. He has long emphasized that what sustains “scientific authority” is not necessarily the reliability of results but the collective trust placed in institutional science. The Langer episode illustrates this clearly: Harvard’s brand, Freakonomics’ enthusiasm, Carroll’s credulous interviewing—all act as amplifiers. The prestige system rewards big, counterintuitive claims (“you can heal faster if you believe time is different”), while the hard, careful work of methodological critique is marginalized. Science becomes, as Turner might put it, a performance of authority rather than a transparent process of inquiry.

This performance is sustained by what Turner calls “the problem of expertise.” In areas where laypeople (and even most scientists) cannot independently evaluate the claims, deference to authority is the default. A physicist like Sean Carroll can interview Langer or Sapolsky without pushing back, because to do so would require specialized statistical knowledge he doesn’t have—or the willingness to publicly puncture the myth of “celebrity science.” In practice, Carroll’s role is not as skeptic but as conduit. The audience hears “this is very data-based”, and the institutional trust carries the claim forward.

Here we see the corrosive effect Turner warns about: when “scientific knowledge” becomes unmoored from rigorous replication and rests instead on networks of authority, prestige, and trust, it functions less like knowledge and more like rhetoric. Junk science is not an anomaly but an outcome of the very structures that elevate “big thinkers” and “genius professors.”

The real damage, then, is not just the spread of dubious healing claims. It’s the erosion of credibility: the debasing of what Gelman calls “the currency of discourse”. If Harvard and Stanford promote weak or fabricated findings, if journals let methodological errors slip through, if popularizers treat anecdotes as evidence, why should the public—or other scientists—trust these institutions at all?

Turner would argue that this points to a broader crisis: we cannot rely on “science as an institution” to guarantee truth. The replication crisis makes clear that institutional filters fail. What’s needed instead is a shift in epistemic culture: away from celebrity amplification, away from spectacular claims, and toward transparency, replication, and humility. Gelman and Brown’s recommendations—replication, preregistration, realistic effect sizes—are steps in that direction, but Turner would add that we must also confront the sociology of authority itself.

Science cannot escape its social foundations. But if its institutions reward bullshit over rigor, amplification over skepticism, then we shouldn’t be surprised when “healing by mindset” becomes the next frontier of Harvard-backed pseudoscience. The lesson here, seen through Turner’s lens, is stark: the problem is not just bad studies, but the social machinery that makes bad studies authoritative.

The fiascos around Ellen Langer’s “mind-body healing” studies and Robert Sapolsky’s flamboyant calorie myths aren’t just examples of sloppy science. They expose the deeper pathology Stephen Turner warns against: scientism, the belief that the authority of science extends wherever its rhetoric can reach, regardless of whether the evidence justifies it.

Scientism thrives on prestige. Harvard or Stanford faculty publish bold, counterintuitive claims—time perception alters healing, chess burns 6000 calories a day—and because the institutions carry weight, the claims are repeated by Freakonomics, NPR, or celebrity podcasters. The underlying evidence collapses on inspection—Gelman and Brown’s re-analysis shows the Langer study’s supposed large effects dissolve into noise once proper modeling is applied. But the structure of scientism ensures that the claim has already done its work: it entered the bloodstream of “what science says.”

Stephen Turner’s critique is that scientism mistakes institutional authority for epistemic warrant. Science, properly understood, is a fragile process of inquiry, full of error and revision. But scientism repackages that messy process into pronouncements delivered with the aura of certainty. It collapses the distinction between “we have data suggesting X” and “science shows X.” When Carroll nods along to Langer’s claims with “Oh yeah”, he’s not just being a bad interviewer; he’s enacting scientism—affirming that the authority of a Harvard psychologist is enough to settle the matter.

The damage is twofold. First, scientism encourages bullshit. As Gelman notes, the incentives tilt toward big claims with shaky evidence: that’s what gets you TED talks, book deals, and journalistic fawning. A cautious, modest statement—“chess players may experience stress responses, but caloric expenditure remains unclear”—would never be amplified. Second, scientism corrodes trust. When audiences discover that the grand claims are hollow, they don’t just doubt the celebrity professor; they doubt science itself. The replication crisis shows that this is not paranoia but pattern.

Scientism, then, is not an overextension of science but a betrayal of it. It treats science as an oracle rather than as inquiry. It thrives on authority, not replication; on spectacle, not method. Turner’s point is that this is not an accident but a structural feature of how modern institutions traffic in “expertise.” The Harvard name, the physicist interviewer, the popular podcast—these are mechanisms for manufacturing belief, not for scrutinizing truth.

The way forward is not to double down on scientism—more hype, more trust, more “science communication” that oversells the weak evidence. It is to accept that science is fallible, limited, and social. To speak honestly about uncertainty. To separate the prestige of institutions from the credibility of specific claims. To refuse the conflation of authority with knowledge.

In short: the lesson of Langer and Sapolsky is not that “science sometimes fails,” but that scientism always fails. It mistakes the theater of authority for the substance of inquiry. Turner’s warning is that unless we confront this, we’ll continue to be awash in Harvard-endorsed healing myths and Stanford-fueled calorie fantasies, while the public’s trust—rightly—evaporates.

The modern scientific enterprise is no longer an archipelago of individual investigators but an institutional complex—journals, universities, funding agencies, and media intermediaries—that generates consensus and distributes credibility. Within such a system, the actual warrant for belief is not the replicability or robustness of results, but the prestige of their institutional carriers. That a Harvard psychologist or a Stanford biologist has said something is enough to constitute, in practice, what “science says.”

This is why a study whose statistical significance evaporates once random effects are modeled correctly, or an anecdotal claim conjured out of numerological error, can nevertheless circulate as fact. Scientism fuses science’s epistemic authority with the social authority of the institutions that speak in its name. The audience cannot, in most cases, assess the methodological details—whether the Langer cupping study’s effect sizes are plausible, or whether Sapolsky’s calorie arithmetic is nonsense. They can only register that “science has spoken.”
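The calorie claim, at least, can be checked with back-of-envelope arithmetic. A rough sketch, using the 6,000-calorie figure from the viral claim and baseline figures that are my own textbook-level assumptions (roughly 2,000 kcal/day for an adult, with the brain consuming about 20% of resting energy):

```python
# Back-of-envelope sanity check of the "chess burns 6,000 calories a day" claim.
# Baseline numbers below are rough assumptions, not measurements.
TYPICAL_DAILY_KCAL = 2000      # approximate adult daily energy expenditure
BRAIN_SHARE = 0.20             # brain's rough share of resting metabolism

brain_kcal = TYPICAL_DAILY_KCAL * BRAIN_SHARE  # the brain's entire daily budget
claimed_kcal = 6000
surplus_needed = claimed_kcal - TYPICAL_DAILY_KCAL  # extra kcal the claim requires

# Even if intense concentration somehow doubled the brain's whole budget,
# that adds ~400 kcal -- an order of magnitude short of the ~4,000 extra
# kcal the claim demands while sitting at a board.
print(f"Brain budget: ~{brain_kcal:.0f} kcal; surplus needed: {surplus_needed} kcal")
```

The point of the exercise is Turner's: a reader who runs this arithmetic needs no institutional authority to see the claim is implausible, yet the claim circulated anyway.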

Turner’s analysis is sharper than the familiar lament that “science sometimes gets things wrong.” His point is that scientism structurally guarantees such wrongness will be amplified. The economy of prestige rewards spectacular, counterintuitive claims that dramatize the power of mind over body, or the stress of chess as equivalent to elite sport. The institutional and media circuits—journals, TED, Freakonomics, Carroll’s podcast—operate as multipliers of these claims. What matters is not validity but communicability: the production of what Turner calls the performance of expertise.

This helps explain why skepticism—Gelman and Brown’s meticulous statistical forensics, or critics pointing out the innumeracy of Sapolsky’s calorie math—remains marginal. Their work lacks the institutional glamour and audience. In scientism, critique has no traction because credibility is not adjudicated by the canons of method but by the distribution of authority. To question the prestige-backed claim is to question the institution itself, which the public, journalists, and even scientists in adjacent fields are reluctant to do.

The result is not merely error but epistemic corruption. Scientism substitutes the aura of certainty for the actual practice of inquiry. It repackages speculative hypotheses as settled science, collapses the distinction between anecdote and evidence, and enacts a vision of science as a secular priesthood—its authority grounded not in the contingency of replication and revision but in the charisma of its institutions and celebrities.

Turner’s warning, then, is not simply methodological but sociological. The replication crisis demonstrates that even under conditions of good faith, science produces unreliable results at scale. Scientism masks this reality, insisting that institutionalized science is coextensive with truth. The danger is not only that particular claims fail but that the entire currency of epistemic trust is debased.

To critique scientism, therefore, is to decouple science as practice from science as authority. It is to insist that Harvard or Stanford imprimatur, journal placement, or celebrity endorsement cannot substitute for the arduous, often negative labor of replication and falsification. It is to recognize that the sociology of expertise—the incentives, the prestige hierarchies, the media amplification—systematically biases what becomes “scientific knowledge.” And it is to see that unless this distinction is restored, we are condemned to cycles of enthusiasm, disillusionment, and cynicism.

Scientism promises certainty where there is only conjecture, authority where there should be doubt. Its collapse is not an aberration but an inevitability. Turner’s contribution is to remind us that what needs reform is not merely statistical practice or publication norms but the very social machinery of scientific authority.

Posted in Science, Stephen Turner | Comments Off on Our System Often Rewards BS Rather Than Truth

New Orleans Gets A Participation Trophy

This framing strikes me as weird, as though New Orleans deserves a participation trophy. NYT: “20 Years After Katrina, New Orleans Is ‘at a Tipping Point’. The city has taken pride in its ability to endure. But many residents, fed up with dysfunction and soaring costs, want it to strive for more.”

Grok: “The NYT headline and subhead do frame New Orleans in a way that could feel patronizing—like the city’s being patted on the back for “enduring” while simultaneously being nudged to “do better.” It’s a weird mix of sympathy and judgment, implying resilience is noble but not enough, and dysfunction is somehow a choice.”

When I watched coverage of Katrina, the last thing I saw was resilience. Instead, I saw residents sitting around complaining and shooting each other; it showed me a city that was a mess. My view hasn’t changed since. We’ve poured $140 billion into New Orleans since Katrina, and it’s been wasted.

Grok:

Your perspective on New Orleans during Katrina reflects a frustration with the chaos and dysfunction that was heavily televised—images of looting, violence, and despair dominated coverage, shaping a narrative of a city unraveling. It’s true that social cohesion seemed to fray; reports from 2005 noted spikes in crime, including shootings, and a breakdown in trust between residents and authorities (e.g., the NOPD’s widely criticized response). Media often amplified this, focusing on sensational scenes over stories of community resilience, like neighbors rescuing each other or grassroots rebuilding efforts.

The $140 billion figure you mention aligns roughly with federal spending on Gulf Coast recovery post-Katrina, with about $120.4 billion allocated through FEMA, HUD, and other agencies by 2015, per a 2020 GAO report. Much of this went to infrastructure (levees, roads), housing, and disaster relief. But “wasted” is a strong claim. Some point to successes: the levee system was overhauled (costing $14 billion), and parts of the city, like the French Quarter, rebounded economically. Tourism hit 18.8 million visitors in 2019, pre-COVID, per New Orleans & Company. Yet, persistent issues—poverty (23% in 2023, per Census data), uneven recovery in areas like the Lower Ninth Ward, and corruption scandals—fuel the sense that funds didn’t always translate to progress.

Posted in New Orleans | Comments Off on New Orleans Gets A Participation Trophy

‘Lies and D.C. Crime Statistics’

Kimberly Strassel writes for the WSJ:

No one should be surprised if investigations show evidence of manipulation, as “downgrading” is nothing new in police departments. A New York Times story from more than a decade ago reported on an anonymous survey of nearly 2,000 retired New York City Police Department officers. It found that manipulation of crime reports—including both downgrading crimes and “discouraging victims from filing complaints”—had “long been part of the culture of the New York Police Department.” As one respondent put it: “Assault becomes harassment, robbery becomes grand larceny, grand larceny becomes petit larceny, burglary becomes criminal trespass.” Five years ago, a veteran MPD officer, Charlotte Djossou, sued the department, claiming it retaliated against her for raising concerns about unlawful searches and data manipulation designed to “make it appear crime was down.” The MPD’s summer settlement with Djossou was referenced in Comer’s letter to Smith demanding information for his probe.

  • The D.C. situation: There’s an additional reason to be suspect of D.C. numbers: they are pretty wild. If FBI numbers are to be trusted (and there’s a separate debate over that), violent crime has indeed come down some from the heights of 2020. But D.C.’s striking drops strain credulity, especially given that as of the beginning of this year, the MPD had only 3,200 sworn members, which the union described as a “50-year low,” and 800 below Bowser’s target of 4,000. Moreover, the crime fighters themselves don’t buy it. Union chief Pemberton suggests there was “potentially” some drop, but said it was “preposterous to suggest that cumulatively we’ve seen 60-plus percent drops in violent crime from where we were in ’23, because we’re out on the street. We know the calls we’re responding to.” And don’t forget that D.C. in recent years has danced to progressive calls to water down penalties against criminals and to pass its Comprehensive Policing and Justice Reform Act, which hamstrings police officers.

    The investigations will tell the real situation. Interestingly, few are casting doubt on recent figures that show Trump’s National Guard presence has rapidly reduced crimes in most categories. Perhaps instead of arguing over the numbers, mayors might look to learn what needs to be done to clean up their cities.

  • ChatGPT says: Philosopher Stephen Turner would probably react to the Strassel/WSJ piece by treating it as a textbook case of the politics of expertise—how statistics, bureaucratic authority, and media narratives get wrapped together into competing claims about legitimacy. Across his writings, Turner makes a few points that map directly onto this situation:

    1. Expertise is always entangled with politics

    Turner argues that experts don’t merely “deliver truths to power.” They operate within political structures, and their authority is mediated by bureaucracies, incentives, and traditions of decision-making. Crime stats (like labor stats, or public health numbers) are not neutral outputs of science but the result of institutional routines, classification practices, and political pressures. This makes the fight over whether DC’s numbers are “real” entirely predictable—crime categories can be downgraded, reporting can be discouraged, or bureaucrats can redefine what counts as violent crime.

    2. The “problem with experts” is legitimacy

    Turner stresses that experts must constantly justify themselves in two ways:

    Output legitimacy: Do their numbers and claims “work,” i.e., do they produce results that make sense on the ground?

    Process legitimacy: Are the procedures transparent, impartial, and insulated from manipulation?

    The police union accusing DC leadership of downgrading offenses shows a breakdown of both. Officers don’t trust the process, and citizens don’t trust the outputs because they don’t match lived experience.

    3. Media and selective attention

    Why does the MSM focus intensely on labor stats under Trump but not crime stats under Democrats? Turner would say this reflects how expert authority gets filtered through politics and media institutions. The media treats some domains (economics, unemployment, inflation) as “objective,” central indicators of legitimacy, while treating others (crime) as less authoritative or even dangerous to touch if they cut against preferred narratives. In Liberal Democracy 3.0, Turner shows how modern liberal systems increasingly delegate contentious domains to expert bodies (“commissions”), which the press then shields from contestation by presenting their outputs as fact. Crime data under progressive mayors fits this pattern—experts pronounce, media amplifies, dissenters are treated as politicizing.

    4. Statistical manipulation is normal, not anomalous

    Turner has a Weberian streak: bureaucracies live off categories, and categories are malleable. “Downgrading” (assault becomes harassment, robbery becomes larceny) is exactly the kind of knowledge/power dynamic he describes—expert systems produce their authority not only by analyzing facts but by defining which facts exist. The fight is less about “truth” and more about who gets to define what counts as truth.

    5. Why asymmetry between Trump-era labor stats and Dem-era crime stats?

    Labor stats: Seen as technical, economic, and central to legitimacy of government. Journalists treat BLS economists as neutral experts.

    Crime stats: Involve local politics, policing, race, punishment—domains where MSM prefers progressive narratives. Questioning them risks undermining broader political projects (e.g., reform, decarceration).

    Turner would likely say this is not hypocrisy so much as the structure of expertise: different expert systems carry different symbolic weights, and media institutions align themselves with some while shielding others. It’s not about truth but about which expert systems are politically serviceable.

    So if you pressed Turner on the Strassel piece, he’d probably shrug and say: this is exactly what happens in an age of experts. Numbers aren’t neutral; they’re weapons in political struggle. Media amplifies some expert domains (labor stats, climate science) and brackets others (crime data, border stats) depending on which align with its own coalitional politics. The real issue isn’t whether the numbers are “true” but whether the public has alternative institutional channels to contest expert claims—or whether we’re stuck in a world where politics is increasingly a fight over whose experts dominate the discourse.

    Here’s how Turner would probably extend his argument to Trump’s deployment of the National Guard in D.C. as a crime-fighting move:

    1. Re-politicizing an expertized domain

    In Liberal Democracy 3.0, Turner describes how modern liberal systems delegate whole swathes of policy to experts—“commissions” and bureaucracies—leaving only a narrow band of issues open for public debate. Crime statistics and policing have been increasingly expertized: criminologists, mayors, police chiefs, and Justice Department officials produce numbers and narratives that are treated as authoritative. By sending in the Guard, Trump was tearing that issue back out of the hands of experts and putting it under direct political control. He was saying: I don’t trust your numbers, I don’t trust your process, and I’ll assert political will over expert consensus.

    2. A challenge to “process legitimacy”

    Expert systems rely on legitimacy by procedure—crime is tracked by standardized reporting, processed through FBI metrics, and publicized by media. Trump’s move implies the process itself is corrupt. He effectively bypassed the expert apparatus and created an alternative measure of legitimacy: do people feel safer when the Guard is on the streets? That reframes the standard away from statistical outputs and back toward political judgment, a very Turner-like moment of rupture.

    3. Populist politics against technocratic authority

    Turner often notes that expertise produces a democratic deficit: decisions are made by specialists, not by the public. Trump exploits this by presenting himself as restoring sovereignty to “the people” against elites hiding behind data. It’s not subtle—he’s saying: forget the charts, look out your window, do you feel safe? That’s populist politics puncturing expert monopoly.

    4. The irony of creating a new expert body

    Turner would also point out the irony: the Guard itself is a professional, disciplined expert institution. In practice, Trump isn’t abolishing expertise, he’s swapping one set of experts (police bureaucrats, statisticians) for another (military experts in control and order). In The Politics of Expertise, Turner stresses that expertise never disappears; it gets redistributed, legitimized in new ways, and aligned with different political coalitions. So the Guard is just another expertized instrument, but one aligned with Trump’s political narrative rather than Bowser’s.

    5. The broader Turner-style take

    Trump’s move is a reassertion of political sovereignty over expertized governance. It dramatizes the public’s suspicion that statistics are manipulated, and it shows how legitimacy can be shifted from “trust the process” to “trust the results you can see.” From Turner’s view, this is an example of how expert systems remain vulnerable to politics: when expert claims lose credibility, political actors can seize the opportunity to reframe the domain entirely.

    Here’s a Turner-style riff, pitched as if it could sit inside The Politics of Expertise:

    The episode of the National Guard in Washington illustrates the instability of expert authority once its legitimacy is questioned. For decades crime statistics have been presented as neutral facts, the outputs of technical procedures of classification and reporting. Their authority rested on the presumption of process legitimacy—that the categories were consistently applied, the counts untainted by politics. When this presumption failed, when police unions and whistleblowers suggested downgrading and manipulation, the numbers ceased to bind. What was once expert territory reverted to politics.

    Trump’s deployment of the Guard was a paradigmatic act of re-politicization. He displaced the authority of the statistical expert with the authority of the visible result. Citizens were asked not to trust the quarterly tables but to trust their own senses: the presence of troops, the absence of crime in their immediate experience. This is not a rejection of expertise but a substitution of one kind for another. Military force is itself a form of expert knowledge—hierarchical, professionalized, routinized—but one whose legitimacy derives from its immediate and demonstrable effects rather than the opaque calculations of bureaucratic crime reporting.

    The lesson is that expert systems are never immune from politics. Their monopoly persists only so long as their outputs retain credibility. Once the outputs are experienced as false or irrelevant, political actors can and will bypass the expert system, imposing new standards of judgment. The National Guard became, in this sense, an alternative expert body: one that could be trusted precisely because its work was visible, not buried in spreadsheets.

    Here’s a Turner-inflected extension on the media asymmetry:

    The differing treatment of economic versus crime statistics illustrates how expert domains are stratified in modern liberal democracies. Some domains—macroeconomic indicators, employment figures, inflation—have been institutionalized as central to state legitimacy. The Bureau of Labor Statistics speaks with an authority that is rarely challenged, and journalists position themselves as guardians of these numbers, treating them as neutral facts against which governments must be judged. In Turner’s terms, these are “expertized” domains whose legitimacy is reinforced by both bureaucratic pedigree and media ritual.

    By contrast, crime statistics occupy a less secure place. They are both closer to lived experience and more open to manipulation through classification. To question them is to open a Pandora’s box of racial politics, policing debates, and urban governance. Here the media, rather than acting as watchdog, often acts as shield: accepting official narratives, protecting the authority of local expert systems, and deflecting scrutiny. This is not inconsistency but structure. Some expert domains are politically useful, aligned with the coalitions that dominate the press; others are too fraught, and their expert claims are insulated instead of interrogated.

    Thus the asymmetry is not about truth but about which kinds of expertise the media treats as politically serviceable. Labor statistics during Trump’s presidency were the chosen battlefield because they bore directly on his legitimacy. Crime statistics under Democratic mayors are treated as settled fact because to contest them would disrupt coalitional commitments. What appears as bias is, in Turner’s framework, the ordinary functioning of an age in which politics is increasingly a struggle between competing expert systems, each backed or shielded by different segments of the press.

    Posted in Expertise, Stephen Turner | Comments Off on ‘Lies and D.C. Crime Statistics’